
    Number morphology on honorific nouns

    Singular honorific nouns in Hindi, Punjabi, and Marathi show interesting behavior with respect to number morphology. While they uniformly trigger plural agreement, certain plural affixes occur on these nouns while others do not. I propose a morphosyntactic analysis of this asymmetry, arguing that the two types of plural affixes realize different syntactic heads: the plural affixes that occur on singular honorific nouns realize n, while the others realize Num. Building on Bhatt & Davis (2021) and using a mechanism for feature copying within the nominal phrase, I propose a structure for singular honorific nouns that captures this generalization.

    Singular tum is not plural: a Distributed Morphology analysis of Hindi verb agreement

    Hindi has a three-way honorificity contrast in the second person: low tuu vs. mid tum vs. honorific aap, and a two-way contrast between non-honorific and honorific DPs in the third person. Honorific DPs are said to be formally plural, as they always trigger plural agreement regardless of semantic number. In this context, I consider the formal number features associated with the non-honorific pronoun tum. Prior work has claimed that, like honorific DPs, tum always bears formal plural features. This is motivated by the fact that in many cases tum takes apparent plural morphology regardless of semantic number. However, Bhatt & Keine (2018) note a puzzling exception to this generalization: tum takes the feminine singular affix -ii when semantically singular, and the feminine plural affix -ĩĩ when semantically plural. I account for this puzzle by assuming that only DPs that are honorific or semantically plural bear the formal plural feature. Since tum is not honorific, it does not bear this feature when it is semantically singular. I show that the apparent plural morphology associated with tum can be accounted for if we assume this morphology is actually underspecified for number. The analysis is couched within a Distributed Morphology framework.
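    The underspecification logic in this abstract can be sketched as a toy competition for vocabulary insertion. Below is a minimal Python illustration, assuming hypothetical feature sets (the exponent -o and the exact features are illustrative, not the paper's actual entries): an entry is inserted only if its features are a subset of the terminal's, and the most specific matching entry wins, so an exponent underspecified for number surfaces with both singular and plural terminals.

    ```python
    # Toy sketch of Distributed Morphology vocabulary insertion.
    # Feature sets and the exponent "-o" are hypothetical illustrations,
    # not the actual entries proposed in the paper.

    ENTRIES = [
        ("-ii", frozenset({"2", "fem", "sg"})),  # feminine singular
        ("-ĩĩ", frozenset({"2", "fem", "pl"})),  # feminine plural
        ("-o",  frozenset({"2"})),               # underspecified for number/gender
    ]

    def insert(terminal_features):
        # Subset Principle: an entry competes only if all of its features
        # are present on the terminal; the most specific candidate wins.
        candidates = [(exp, feats) for exp, feats in ENTRIES
                      if feats <= terminal_features]
        exponent, _ = max(candidates, key=lambda c: len(c[1]))
        return exponent

    # A terminal lacking the formal plural feature gets the singular affix;
    # the underspecified exponent covers the "apparent plural" cases.
    print(insert(frozenset({"2", "fem", "sg"})))   # -ii
    print(insert(frozenset({"2", "fem", "pl"})))   # -ĩĩ
    print(insert(frozenset({"2", "masc", "sg"})))  # -o
    print(insert(frozenset({"2", "masc", "pl"})))  # -o
    ```

    The masculine forms receive the same exponent whether singular or plural, mirroring how morphology underspecified for number can look plural without the terminal bearing a formal plural feature.
    
    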

    Ergative case assignment in Hindi-Urdu: Evidence from light verb compounds

    Various accounts have been proposed for ergative/absolutive case assignment in Hindi-Urdu (HU) within the Minimalist Program (Ura 2006, Anand & Nevins 2006, among others). Using facts about subject case assignment in a particular type of light verb compound in HU as evidence, I propose a syntactic account of subject case assignment in the language in general. This account relies on two claims: (i) absolutive case can be assigned by some I, V, and v heads to the subject, or (in the case of v) to the object, and (ii) ergative case results from a special KP configuration, grammatical only when absolutive case cannot be assigned to the subject. I show that this proposal also explains facts about verb agreement in the language.

    Hindi nominal suffixes are bimorphemic: A Distributed Morphology analysis

    This paper provides a Distributed Morphology (DM) analysis of Hindi nominal (noun and adjectival) inflection. Contra Singh & Sarma (2010), I argue that nominal suffixes contain two morphemes: a basic morpheme, and an additional morpheme with a restricted distribution. The presence of two different morphemes is especially evident when one compares noun and adjectival inflectional suffixes, a comparison Singh & Sarma (2010) do not make, since they only look at noun inflection. I also show that the so-called adjectival inflectional suffixes are not limited to adjectives and may occur on nouns, provided the noun is not at the right edge of the noun phrase. The regular noun inflection, on the other hand, is limited to nouns at the right edge of the noun phrase. This is demonstrated using a type of coordinative compound found in Hindi. I then take the fact that nouns can take either the regular noun inflection or the so-called "adjectival" inflection as motivation for a unified analysis of both sets of suffixes. I demonstrate that, after undoing certain phonological rules, the difference between the "adjectival" and regular noun inflectional suffixes can be summarized by saying that the additional morpheme surfaces only in the regular noun inflectional suffixes. Finally, I provide vocabulary entries and morphological operations that capture the facts about the distribution of the various basic and additional morphemes.

    Distill to Delete: Unlearning in Graph Networks with Knowledge Distillation

    Graph unlearning has emerged as a pivotal method to delete information from a pre-trained graph neural network (GNN). One may delete nodes, a class of nodes, edges, or a class of edges. An unlearning method enables the GNN model to comply with data protection regulations (i.e., the right to be forgotten), adapt to evolving data distributions, and reduce the carbon footprint of GPU-hours by avoiding repetitive retraining. Existing partitioning- and aggregation-based methods have limitations due to their poor handling of local graph dependencies and additional overhead costs. More recently, GNNDelete offered a model-agnostic approach that alleviates some of these issues. Our work takes a novel approach to these challenges in graph unlearning through knowledge distillation, as it distills to delete in GNNs (D2DGN). It is a model-agnostic distillation framework in which the complete graph knowledge is divided and marked for retention and deletion. It performs distillation with response-based soft targets and feature-based node embeddings while minimizing KL divergence. The unlearned model effectively removes the influence of deleted graph elements while preserving knowledge about the retained graph elements. When evaluated on various real-world graph datasets, D2DGN surpasses existing methods by up to 43.1% (AUC) in edge and node unlearning tasks. Other notable advantages include better efficiency, better performance in removing target elements, preservation of performance for the retained elements, and zero overhead costs. Notably, D2DGN surpasses the state-of-the-art GNNDelete by 2.4% in AUC, improves the membership inference ratio by +1.3, requires 10.2×10^6 fewer FLOPs per forward pass, and is up to 3.2× faster.
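    The core idea of response-based distillation with KL divergence can be sketched in a few lines. This is a hypothetical NumPy illustration under simple assumptions, not the actual D2DGN implementation: retained nodes are pulled toward the teacher's softened predictions, while nodes marked for deletion are pushed toward an uninformative uniform distribution, erasing the teacher's knowledge about them.

    ```python
    import numpy as np

    def softmax(x, T=1.0):
        # Temperature-scaled softmax over the last axis.
        z = np.exp((x - x.max(axis=-1, keepdims=True)) / T)
        return z / z.sum(axis=-1, keepdims=True)

    def unlearning_distill_loss(student_logits, teacher_logits, delete_mask, T=2.0):
        """Sketch of a distill-to-delete loss (illustrative, not D2DGN's code).

        Soft targets: the teacher's distribution for retained nodes,
        a uniform distribution for nodes marked for deletion.
        Returns the mean KL divergence KL(target || student) over nodes."""
        p_student = softmax(student_logits, T)
        p_teacher = softmax(teacher_logits, T)
        uniform = np.full_like(p_teacher, 1.0 / p_teacher.shape[-1])
        target = np.where(delete_mask[:, None], uniform, p_teacher)
        kl = np.sum(target * (np.log(target) - np.log(p_student)), axis=-1)
        return float(np.mean(kl))

    # Example: 2 nodes, 3 classes, node 1 marked for deletion.
    loss = unlearning_distill_loss(
        np.array([[1.0, 0.0, -1.0], [0.5, 0.2, 0.1]]),
        np.array([[2.0, 0.0, -2.0], [0.4, 0.3, 0.2]]),
        np.array([False, True]),
    )
    ```

    Minimizing this loss drives the student to match the teacher on retained elements while its predictions on deleted elements collapse toward chance, which is the intuition behind "distill to delete".
    
    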

    Contextual Care Protocol using Neural Networks and Decision Trees

    A contextual care protocol is used by a medical practitioner for patient healthcare, given the context or situation the specified patient is in. This paper proposes a method to build an automated, self-adapting protocol that can help make relevant, early decisions for effective healthcare delivery. The hybrid model leverages neural networks and decision trees: the neural network estimates the probability of each disease, and each decision tree represents the care protocol for a disease. These trees are subject to change when the diagnosticians find aberrations. The corrections or prediction errors are clustered into similar groups for scalability and review by the experts, and the corrections suggested by the experts are incorporated into the model.
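    The two-stage routing described in the abstract can be sketched as follows. All disease names, questions, and protocol actions here are hypothetical placeholders, and a probability dictionary stands in for the neural network's output; the paper's actual model and protocols will differ.

    ```python
    # Toy protocol trees: one (single-level) decision tree per disease.
    # Contents are illustrative placeholders, not clinical guidance.
    PROTOCOLS = {
        "flu":      {"question": "fever above 39C?",
                     "yes": "antipyretics and review in 24h",
                     "no":  "rest and fluids"},
        "migraine": {"question": "aura present?",
                     "yes": "triptan therapy",
                     "no":  "NSAID and dark room"},
    }

    def recommend(disease_probs, answers):
        # Stage 1: stand-in for the neural network -- take the most
        # probable disease from its estimated distribution.
        disease = max(disease_probs, key=disease_probs.get)
        # Stage 2: walk that disease's protocol tree using the
        # practitioner's answers to the tree's questions.
        node = PROTOCOLS[disease]
        branch = "yes" if answers.get(node["question"], False) else "no"
        return disease, node[branch]

    disease, action = recommend({"flu": 0.7, "migraine": 0.3},
                                {"fever above 39C?": True})
    ```

    The self-adapting part of the paper corresponds to editing the trees in `PROTOCOLS` when experts review clustered prediction errors, leaving the routing logic unchanged.
    
    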

    Evaluating Paraphrastic Robustness in Textual Entailment Models

    We present PaRTE, a collection of 1,126 pairs of Recognizing Textual Entailment (RTE) examples for evaluating whether models are robust to paraphrasing. We posit that if RTE models understand language, their predictions should be consistent across inputs that share the same meaning. We use the evaluation set to determine whether RTE models' predictions change when examples are paraphrased. In our experiments, contemporary models change their predictions on 8–16% of paraphrased examples, indicating that there is still room for improvement.
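    The evaluation protocol described here reduces to checking prediction agreement across paraphrase pairs. Below is a minimal sketch with an assumed model interface (a callable from an example to a label) and a toy word-overlap model; neither reflects PaRTE's actual evaluation code or the models tested in the paper.

    ```python
    def consistency_rate(model, pairs):
        """Fraction of (original, paraphrase) pairs on which the model's
        prediction is unchanged after paraphrasing."""
        unchanged = sum(model(orig) == model(para) for orig, para in pairs)
        return unchanged / len(pairs)

    def toy_rte(example):
        # Toy model: predicts entailment iff every hypothesis word
        # also appears in the premise.
        premise, hypothesis = example
        if set(hypothesis.split()) <= set(premise.split()):
            return "entailment"
        return "not_entailment"

    # Each pair: (original example, paraphrased example),
    # where an example is a (premise, hypothesis) tuple.
    pairs = [
        (("a man sleeps", "man sleeps"),
         ("a man is asleep", "man sleeps")),        # prediction flips
        (("the dog runs fast", "the dog runs"),
         ("the dog runs quickly", "the dog runs")),  # prediction stable
    ]
    rate = consistency_rate(toy_rte, pairs)
    ```

    A brittle model like this one changes its answer whenever the paraphrase breaks surface word overlap, which is exactly the failure mode the 8–16% figure quantifies for contemporary RTE models.
    
    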